Nanodegree key: nd101
Version: 1.0.0
Locale: en-us
Learn about foundational topics in the exciting field of deep learning, the technology behind state-of-the-art artificial intelligence.
Content
Part 01 : Neural Networks
Neural networks are the bedrock of deep learning. In this section, you'll learn how they work and test your ability by building a neural network from scratch.
-
Module 01: Week 1 - Introduction
-
Lesson 01: Welcome
Welcome to the Deep Learning Nanodegree Foundations Program! In this lesson, you'll meet your instructors, find out about the field of Deep Learning, and learn how to make the most of the resources Udacity provides.
- Concept 01: Welcome to the Deep Learning Nanodegree Foundations
- Concept 02: Projects You Will Build
- Concept 03: Meet Your Instructors
- Concept 04: The First Week
- Concept 05: Prerequisites
- Concept 06: Community Support
- Concept 07: Deadline Policy
- Concept 08: We Value Your Feedback
- Concept 09: Getting Set Up
-
Lesson 02: Anaconda
Anaconda is a package and environment manager built specifically for data science. Learn how to use Anaconda to improve your data analysis workflow.
-
Lesson 03: Jupyter Notebooks
Learn how to use Jupyter Notebooks to create documents combining code, text, images, and more.
- Concept 01: Instructor
- Concept 02: What are Jupyter notebooks?
- Concept 03: Installing Jupyter Notebook
- Concept 04: Launching the notebook server
- Concept 05: Notebook interface
- Concept 06: Code cells
- Concept 07: Markdown cells
- Concept 08: Keyboard shortcuts
- Concept 09: Magic keywords
- Concept 10: Converting notebooks
- Concept 11: Creating a slideshow
- Concept 12: Finishing up
-
Lesson 04: Applying Deep Learning
In this lesson, you'll get your hands dirty by playing around with a few examples of deep learning. Don't worry if you don't understand what's going on! The goal here is just for you to play around with some models others have already created and have fun.
-
-
Module 02: Week 1 - Regression
-
Lesson 01: Regression
Learn about linear regression and logistic regression models. These simple machine learning models are the building blocks of neural networks.
-
-
Module 03: Week 2 - Neural Networks
-
Lesson 01: Matrix Math and NumPy Refresher
In this lesson, you'll review the matrix math you'll need in order to build your neural networks. You'll also explore NumPy, the library you'll use to efficiently work with matrices in Python. A brief NumPy sketch follows the concept list below.
- Concept 01: Introduction
- Concept 02: Data Dimensions
- Concept 03: Data in NumPy
- Concept 04: Element-wise Matrix Operations
- Concept 05: Element-wise Operations in NumPy
- Concept 06: Matrix Multiplication: Part 1
- Concept 07: Matrix Multiplication: Part 2
- Concept 08: NumPy Matrix Multiplication
- Concept 09: Matrix Transposes
- Concept 10: Transposes in NumPy
- Concept 11: NumPy Quiz
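For reference, here is a minimal sketch of the NumPy operations this lesson covers, assuming only that NumPy is installed:

```python
import numpy as np

# Element-wise operations act on matching positions.
a = np.array([[1, 2], [3, 4]])
b = np.array([[10, 20], [30, 40]])
print(a + b)   # [[11 22] [33 44]]
print(a * b)   # element-wise product, NOT matrix multiplication

# Matrix multiplication requires the inner dimensions to agree:
# (2x2) @ (2x1) -> (2x1)
w = np.array([[0.5], [-0.5]])
print(np.matmul(a, w))

# Transposing swaps rows and columns.
print(a.T)
```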
-
Lesson 02: Intro to Neural Networks
In this lesson, you'll dive deeper into the intuition behind logistic regression and neural networks. You'll also implement gradient descent and backpropagation in Python right here in the classroom. A short sketch of a single gradient descent step follows the concept list below.
- Concept 01: Introducing Luis
- Concept 02: Logistic Regression Quiz
- Concept 03: Logistic Regression Answer
- Concept 04: Neural Networks
- Concept 05: Perceptron
- Concept 06: AND Perceptron Quiz
- Concept 07: OR & NOT Perceptron Quiz
- Concept 08: XOR Perceptron Quiz
- Concept 09: The Simplest Neural Network
- Concept 10: Gradient Descent
- Concept 11: Gradient Descent: The Math
- Concept 12: Gradient Descent: The Code
- Concept 13: Implementing Gradient Descent
- Concept 14: Multilayer Perceptrons
- Concept 15: Backpropagation
- Concept 16: Implementing Backpropagation
- Concept 17: Further Reading
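As a preview of what you'll implement, here is a minimal sketch of one gradient descent step for a single sigmoid unit (the values are illustrative, and this is not the classroom's exact code):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.array([1.0, 2.0, 3.0])          # inputs
y = 0.5                                 # target
weights = np.array([0.1, -0.2, 0.05])
learnrate = 0.5

output = sigmoid(np.dot(x, weights))    # forward pass
error = y - output
# Error term: the error scaled by the sigmoid derivative f'(h) = f(h)(1 - f(h)).
error_term = error * output * (1 - output)
weights += learnrate * error_term * x   # one gradient descent update
```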
-
Lesson 03: Your first neural network
In this project, you'll build and train your own Neural Network from scratch to predict the number of bikeshare users on a given day. Good luck!
-
Part 02 : Convolutional Neural Networks
Convolutional neural networks are the standard for solving vision problems. They're used in self-driving cars, face recognition, medical imaging, and a whole lot more! You'll learn how this network works and apply it to an image classification problem.
-
Module 01: Week 3 - Model Evaluation and Validation
-
Lesson 01: Model Evaluation and Validation
In this lesson, you'll learn some of the basics of training models. You'll learn the power of testing and cross-validation, and some interesting metrics for evaluating models, such as accuracy and the R2 score.
-
-
Module 02: Week 3 - Sentiment Analysis
-
Lesson 01: Sentiment Analysis with Andrew Trask
In this lesson, Andrew Trask, the author of Grokking Deep Learning, will walk you through using neural networks for sentiment analysis. In particular, you'll build a network that classifies movie reviews as positive or negative just based on their text!
- Concept 01: Introducing Andrew Trask
- Concept 02: Meet Andrew
- Concept 03: Materials
- Concept 04: Framing the Problem
- Concept 05: Mini Project 1
- Concept 06: Mini Project 1 Solution
- Concept 07: Transforming Text into Numbers
- Concept 08: Mini Project 2
- Concept 09: Mini Project 2 Solution
- Concept 10: Building a Neural Network
- Concept 11: Mini Project 3
- Concept 12: Mini Project 3 Solution
- Concept 13: Understanding Neural Noise
- Concept 14: Mini Project 4
- Concept 15: Understanding Inefficiencies in our Network
- Concept 16: Mini Project 5
- Concept 17: Mini Project 5 Solution
- Concept 18: Further Noise Reduction
- Concept 19: Mini Project 6
- Concept 20: Mini Project 6 Solution
- Concept 21: Analysis: What's Going on in the Weights?
- Concept 22: Conclusion
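To preview the flavor of the mini projects, here is a tiny sketch of counting word frequencies per class, the starting point for the sentiment network (the reviews below are hypothetical toy data, not the lesson's dataset):

```python
from collections import Counter

reviews = ["great movie loved it", "terrible plot awful acting"]
labels = ["POSITIVE", "NEGATIVE"]

positive_counts, negative_counts = Counter(), Counter()
for review, label in zip(reviews, labels):
    counts = positive_counts if label == "POSITIVE" else negative_counts
    counts.update(review.split())

# Words that appear far more often in one class are predictive signals.
print(positive_counts.most_common(3))
```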
-
Lesson 02: Intro to TFLearn
In this lesson, you'll get to use TFLearn, an incredibly simple library for building neural networks. You'll apply TFLearn to both a sentiment analysis assignment and an image classification assignment.
- Concept 01: Welcome to this lesson!
- Concept 02: ReLU and Softmax Activation Functions
- Concept 03: Categorical Cross-Entropy
- Concept 04: Sentiment Analysis with TFLearn
- Concept 05: Sentiment Analysis Solution
- Concept 06: Handwritten Digit Recognition
- Concept 07: Handwritten Digit Recognition Solution
-
Lesson 03: Preparing for Siraj's Lesson
In this lesson, we'll cover representing words as inputs to a network. We'll cover Bag of Words as well as Word2Vec. In addition, we'll share a few resources to introduce RNNs and LSTMs.
-
-
Module 03: Week 4 - Math Notations
-
Lesson 01: MiniFlow
In this lesson, you'll build your own small version of TensorFlow, called MiniFlow. By building it, you'll gain an understanding of how TensorFlow works under the hood and more insight into important concepts like backpropagation. A stripped-down sketch of the core idea follows the concept list below.
- Concept 01: Welcome to MiniFlow
- Concept 02: Graphs
- Concept 03: MiniFlow Architecture
- Concept 04: Forward Propagation
- Concept 05: Forward Propagation Solution
- Concept 06: Learning and Loss
- Concept 07: Linear Transform
- Concept 08: Sigmoid Function
- Concept 09: Cost
- Concept 10: Cost Solution
- Concept 11: Gradient Descent
- Concept 12: Backpropagation
- Concept 13: Stochastic Gradient Descent
- Concept 14: SGD Solution
- Concept 15: Outro
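Here is a stripped-down sketch of the graph-plus-forward-pass idea behind MiniFlow (the classroom's actual MiniFlow API is more complete and differs in details):

```python
class Node:
    def __init__(self, inbound_nodes=None):
        self.inbound_nodes = inbound_nodes or []
        self.value = None

class Input(Node):
    def forward(self):
        pass  # an Input node's value is set externally

class Add(Node):
    def forward(self):
        self.value = sum(n.value for n in self.inbound_nodes)

# Forward propagation: visit the nodes in topological order.
x, y = Input(), Input()
add = Add([x, y])
x.value, y.value = 4, 5
for node in [x, y, add]:  # a valid topological ordering of this graph
    node.forward()
print(add.value)  # 9
```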
-
-
Module 04: Week 5 - Intro to TensorFlow
-
Lesson 01: Cloud Computing
Take advantage of Amazon's GPUs to train your neural network faster. In this lesson, you'll set up an instance on AWS and train a neural network on a GPU.
-
Lesson 02: Intro to TensorFlow
Vincent Vanhoucke, Principal Scientist at Google Brain, introduces you to deep learning and TensorFlow, Google's deep learning framework. A short starter sketch follows the concept list below.
- Concept 01: Intro to Vincent
- Concept 02: What is Deep Learning
- Concept 03: Solving Problems - Big and Small
- Concept 04: Let's Get Started
- Concept 05: Installing TensorFlow
- Concept 06: Hello, Tensor World!
- Concept 07: Transition to Classification
- Concept 08: Supervised Classification
- Concept 09: Training Your Logistic Classifier
- Concept 10: Quiz: TensorFlow Linear Function
- Concept 11: ReLU and Softmax Activation Functions
- Concept 12: Quiz: TensorFlow Softmax
- Concept 13: One-Hot Encoding
- Concept 14: Categorical Cross-Entropy
- Concept 15: Quiz: TensorFlow Cross Entropy
- Concept 16: Minimizing Cross Entropy
- Concept 17: Practical Aspects of Learning
- Concept 18: Quiz: Numerical Stability
- Concept 19: Normalized Inputs and Initial Weights
- Concept 20: Measuring Performance
- Concept 21: Optimizing a Logistic Classifier
- Concept 22: Stochastic Gradient Descent
- Concept 23: Momentum and Learning Rate Decay
- Concept 24: Parameter Hyperspace
- Concept 25: Quiz: Mini-batch
- Concept 26: Epochs
- Concept 27: Lab: TensorFlow Neural Network
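For orientation, here is a minimal sketch of the logistic classifier this lesson builds up to, assuming the TensorFlow 1.x API used throughout the course:

```python
import tensorflow as tf  # TensorFlow 1.x

# Linear function followed by softmax: the logistic classifier.
features = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.float32, [None, 10])
weights = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1))
bias = tf.Variable(tf.zeros([10]))

logits = tf.add(tf.matmul(features, weights), bias)
# Cross-entropy against one-hot labels, as derived in the lesson.
# (In practice, tf.nn.softmax_cross_entropy_with_logits is more numerically stable.)
cross_entropy = -tf.reduce_sum(labels * tf.log(tf.nn.softmax(logits)), axis=1)
loss = tf.reduce_mean(cross_entropy)
optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
```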
-
-
Module 05: Week 6 - Image Classification
-
Lesson 01: Deep Neural Networks
Vincent walks you through how to go from a simple neural network to a deep neural network. You'll learn about why additional layers can help and how to prevent overfitting.
- Concept 01: Intro to Deep Neural Networks
- Concept 02: Two-Layer Neural Network
- Concept 03: Quiz: TensorFlow ReLUs
- Concept 04: Deep Neural Network in TensorFlow
- Concept 05: Training a Deep Learning Network
- Concept 06: Save and Restore TensorFlow Models
- Concept 07: Finetuning
- Concept 08: Regularization Intro
- Concept 09: Regularization
- Concept 10: Regularization Quiz
- Concept 11: Dropout
- Concept 12: Dropout Pt. 2
- Concept 13: Quiz: TensorFlow Dropout
-
Lesson 02: Convolutional Networks
Vincent explains the theory behind Convolutional Neural Networks and how they help us dramatically improve performance in image classification.
- Concept 01: Intro To CNNs
- Concept 02: Color
- Concept 03: Statistical Invariance
- Concept 04: Convolutional Networks
- Concept 05: Intuition
- Concept 06: Filters
- Concept 07: Feature Map Sizes
- Concept 08: Convolutions continued
- Concept 09: Parameters
- Concept 10: Quiz: Convolution Output Shape
- Concept 11: Solution: Convolution Output Shape
- Concept 12: Quiz: Number of Parameters
- Concept 13: Solution: Number of Parameters
- Concept 14: Quiz: Parameter Sharing
- Concept 15: Solution: Parameter Sharing
- Concept 16: Visualizing CNNs
- Concept 17: TensorFlow Convolution Layer
- Concept 18: Explore The Design Space
- Concept 19: TensorFlow Max Pooling
- Concept 20: Quiz: Pooling Intuition
- Concept 21: Solution: Pooling Intuition
- Concept 22: Quiz: Pooling Mechanics
- Concept 23: Solution: Pooling Mechanics
- Concept 24: Quiz: Pooling Practice
- Concept 25: Solution: Pooling Practice
- Concept 26: Quiz: Average Pooling
- Concept 27: Solution: Average Pooling
- Concept 28: 1x1 Convolutions
- Concept 29: Inception Module
- Concept 30: Convolutional Network in TensorFlow
- Concept 31: TensorFlow Convolution Layer
- Concept 32: Solution: TensorFlow Convolution Layer
- Concept 33: TensorFlow Pooling Layer
- Concept 34: Solution: TensorFlow Pooling Layer
- Concept 35: CNNs - Additional Resources
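A small helper captures the output-shape arithmetic from the quizzes above, following TensorFlow's 'SAME'/'VALID' padding conventions:

```python
import math

def conv_output_shape(in_height, in_width, filter_size, stride, padding):
    """Spatial output size of a convolution or pooling layer.

    'SAME' pads so the output is roughly input/stride;
    'VALID' applies the filter only where it fully fits.
    """
    if padding == 'SAME':
        out_h = math.ceil(in_height / stride)
        out_w = math.ceil(in_width / stride)
    else:  # 'VALID'
        out_h = math.ceil((in_height - filter_size + 1) / stride)
        out_w = math.ceil((in_width - filter_size + 1) / stride)
    return out_h, out_w

# 32x32 input, 8x8 filter, stride 2, 'SAME' padding -> (16, 16)
print(conv_output_shape(32, 32, 8, 2, 'SAME'))
```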
-
Lesson 03: Siraj's Image Classification
Siraj will go over the history of image classification, then he'll dive into the concepts behind convolutional networks and why they are so amazing. He'll also create an image classifier for cats & dogs in 40 lines of Python!
-
Lesson 04: Image Classification
Classify images from the CIFAR-10 dataset using a convolutional neural network.
-
Part 03 : Recurrent Neural Networks
Recurrent neural networks are great for making predictions on sequential data like music and text. With this kind of network, you can generate new music, translate a language, or predict a seizure from an electroencephalogram. This section will teach you how to build and train a recurrent neural network.
-
Module 01: Week 7 - Recurrent Neural Networks
-
Lesson 01: Intro to Recurrent Neural Networks
Recurrent neural networks are able to learn from sequences of data. In this lesson, you'll learn the concepts behind recurrent networks and see how a character-wise recurrent network is implemented in TensorFlow.
- Concept 01: Intro to RNNs
- Concept 02: LSTMs
- Concept 03: Character-wise RNNs
- Concept 04: Sequence Batching
- Concept 05: Character-wise RNN Notebook
- Concept 06: Implementing a Character-wise RNN
- Concept 07: Batching Data Solution
- Concept 08: LSTM Cell
- Concept 09: LSTM Cell Solution
- Concept 10: RNN Output
- Concept 11: Network Loss
- Concept 12: Output and Loss Solutions
- Concept 13: Build the Network
- Concept 14: Build the Network Solution
- Concept 15: RNN Resources
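Here is a simplified sketch of the sequence-batching idea from this lesson (the classroom solution constructs the shifted targets more carefully):

```python
import numpy as np

def get_batches(arr, batch_size, seq_len):
    """Split a long sequence into (batch_size, seq_len) windows.

    Targets are the inputs shifted one step: at every position the
    network learns to predict the next character.
    """
    chars_per_batch = batch_size * seq_len
    n_batches = len(arr) // chars_per_batch
    arr = arr[:n_batches * chars_per_batch].reshape((batch_size, -1))
    for n in range(0, arr.shape[1], seq_len):
        x = arr[:, n:n + seq_len]
        y = np.roll(x, -1, axis=1)  # simplistic wrap-around shift
        yield x, y

for x, y in get_batches(np.arange(40), batch_size=2, seq_len=5):
    print(x.shape)  # (2, 5)
```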
-
Lesson 02: Siraj's Stock Prediction
Siraj is going to predict the closing price of the S&P 500 using a special type of recurrent neural network called an LSTM network. He'll explain why we use recurrent nets for time series data, and why LSTMs boost our network's memory power.
-
Lesson 03: Hyperparameters
In this lesson, we'll look at a number of different hyperparameters that are important for our deep learning work. We'll discuss starting values and intuitions for tuning each hyperparameter.
- Concept 01: Introducing Jay
- Concept 02: Introduction
- Concept 03: Learning Rate
- Concept 04: Learning Rate
- Concept 05: Minibatch Size
- Concept 06: Number of Training Iterations / Epochs
- Concept 07: Number of Hidden Units / Layers
- Concept 08: RNN Hyperparameters
- Concept 09: RNN Hyperparameters
- Concept 10: Sources & References
-
-
Module 02: Week 8 - Art Generation
-
Lesson 01: Embeddings and Word2vec
In this lesson, you'll learn about embeddings in neural networks by implementing the word2vec model.
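As a preview, here is a minimal sketch of generating skip-gram (center, context) training pairs, the raw material for word2vec (toy tokens; the lesson's implementation adds subsampling and negative sampling):

```python
import random

def skip_gram_pairs(tokens, window=2):
    """Generate (center, context) pairs for a skip-gram model."""
    pairs = []
    for i, center in enumerate(tokens):
        r = random.randint(1, window)  # sample a window size, as word2vec does
        lo, hi = max(0, i - r), min(len(tokens), i + r + 1)
        pairs.extend((center, tokens[j]) for j in range(lo, hi) if j != i)
    return pairs

print(skip_gram_pairs("the quick brown fox jumps".split()))
```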
-
Lesson 02: Siraj's Style Transfer
Siraj will show you how to generate art with deep learning networks. He'll also talk about the history of computer generated art, and why deep learning models are so good at making art.
-
Lesson 03: Q&A with FloydHub Founders
A Q&A Session with Sai Soundararaj and Naren Thiagarajan from FloydHub.
-
-
Module 03: Week 9 - Music Generation
-
Lesson 01: TensorBoard
In this lesson you'll learn about using TensorBoard to inspect your network. You can view the TensorFlow graph and the distributions of variables in the network. You can also use it to compare multiple training runs with different hyperparameters.
-
Lesson 02: Siraj's Music Generation
Siraj is going to build a music generating neural network trained on jazz songs in Keras. He'll go over the history of algorithmic generation, then he'll walk step by step through the process of how LSTM networks help us generate music.
-
-
Module 04: Week 10 - Language Generation
-
Lesson 01: Siraj's Text Summarization
Siraj will teach you how to transform an essay into a single sentence using Keras, and talk about word embeddings, encoder-decoder architectures, and the role of attention in the theory of learning.
-
Lesson 02: Weight Initialization
In this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution, allowing it to reach the best solution more quickly.
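A minimal NumPy sketch of the scaled-initialization rule of thumb this lesson discusses (layer sizes are illustrative):

```python
import numpy as np

n_inputs, n_hidden = 784, 128

# All-zero weights learn nothing; overly large weights saturate activations.
# A common rule of thumb: draw from a normal scaled by 1/sqrt(n_inputs).
weights = np.random.normal(0.0, 1.0 / np.sqrt(n_inputs), (n_inputs, n_hidden))
print(weights.std())  # roughly 1/sqrt(784) ~= 0.036
```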
-
Lesson 04: Generate TV Scripts
Generate a TV script using a recurrent neural network.
-
-
Module 05: Week 11 - Transfer Learning
-
Lesson 01: Transfer Learning in TensorFlow
In practice, most people don't train their own networks on massive datasets. In this lesson, you'll learn how to use a pretrained network on a new problem with transfer learning.
-
Lesson 02: Siraj's Language Translation
Siraj will build his own language translator using TensorFlow! He'll go over several translation methods and talk about how Google Translate is able to achieve state-of-the-art performance.
-
-
Module 06: Sequence to Sequence
-
Lesson 01: Sequence to Sequence
Here you'll learn about a specific architecture of RNNs for generating one sequence from another sequence. These RNNs are useful for chatbots, machine translation, and more!
- Concept 01: Introducing Jay Alammar
- Concept 02: Jay Introduction
- Concept 03: Applications
- Concept 04: Architectures
- Concept 05: Architectures in More Depth
- Concept 06: Preprocessing
- Concept 07: Sequence to sequence in TensorFlow
- Concept 08: Inputs
- Concept 09: Further Reading
- Concept 10: Sequence to Sequence in TensorFlow
-
Lesson 02: Siraj's Chatbot
Siraj will make a question-answering chatbot using a Dynamic Memory Network. He'll go over different chatbot methodologies, then dive into how memory networks work, with accompanying code in Keras.
-
-
Module 07: Language Translation
-
Lesson 01: Reinforcement Learning
Here you'll learn about using reinforcement learning to train an AI agent to play games.
-
Lesson 02: Siraj's Reinforcement Learning
Siraj will show how to solve the multi-armed bandit problem (maximizing success for a given slot machine) using a reinforcement learning technique called policy gradients.
-
Lesson 03: Translation Project
Use a neural network to translate from one language to another.
-
-
Module 08: Autoencoders
-
Lesson 01: Siraj's Image Generation
Siraj is going to build a variational autoencoder capable of generating novel images after being trained on a collection of images, using handwritten digit images as training data.
-
Lesson 02: Autoencoders
Autoencoders are neural networks used for data compression, image denoising, and dimensionality reduction. Here, you'll build autoencoders using TensorFlow.
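Here is a minimal dense-autoencoder sketch, assuming the TensorFlow 1.x layers API used in this part of the course (layer sizes are illustrative):

```python
import tensorflow as tf  # TensorFlow 1.x

# Compress 784-pixel images down to 32 units, then reconstruct them.
inputs = tf.placeholder(tf.float32, [None, 784])
encoded = tf.layers.dense(inputs, 32, activation=tf.nn.relu)  # compressed code
logits = tf.layers.dense(encoded, 784)                        # reconstruction
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=inputs, logits=logits))
opt = tf.train.AdamOptimizer(0.001).minimize(loss)
```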
-
Part 04 : Generative Adversarial Networks
Generative adversarial networks are a type of unsupervised learning where two neural networks compete against each other. This is commonly used to generate image data. You’ll learn how to build your own generative adversarial network and pit two neural networks against each other.
-
Module 01: Week 15 - Generative Adversarial Networks
-
Lesson 01: Generative Adversarial Networks
Ian Goodfellow, the inventor of GANs, introduces you to these exciting models. You'll also implement your own GAN on the MNIST dataset.
- Concept 01: Introducing Ian Goodfellow
- Concept 02: What can you do with GANs?
- Concept 03: How GANs work
- Concept 04: Games and Equilibria
- Concept 05: Practical tips and tricks for training GANs
- Concept 06: Build a GAN
- Concept 07: Get started with a GAN
- Concept 08: Generator Network
- Concept 09: Discriminator Network
- Concept 10: Generator and Discriminator Solutions
- Concept 11: Building the Network
- Concept 12: Building the Network Solution
- Concept 13: Training Losses
- Concept 14: Training Optimizers
- Concept 15: Training Losses and Optimizers Solution
- Concept 16: A Trained GAN
- Concept 17: Doing More With Your GAN
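For reference, here is a sketch of the adversarial losses in the style this lesson covers, assuming TensorFlow 1.x (the classroom version also applies label smoothing):

```python
import tensorflow as tf  # TensorFlow 1.x

def gan_losses(d_logits_real, d_logits_fake):
    # Discriminator: push real images toward 1 and generated images toward 0.
    d_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.ones_like(d_logits_real), logits=d_logits_real) +
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.zeros_like(d_logits_fake), logits=d_logits_fake))
    # Generator: fool the discriminator into outputting 1 for fakes.
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.ones_like(d_logits_fake), logits=d_logits_fake))
    return d_loss, g_loss
```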
-
Lesson 02: Siraj's Video Generation
Siraj is going to use a Deep Convolutional GAN to generate images of the alien language from the movie Arrival, which we can then stitch together into an animated video.
-
-
Module 02: Week 17 - Deep Convolutional GANs
-
Lesson 01: Siraj's One-Shot Learning
See how memory-augmented neural networks can help achieve one-shot classification from a small labeled dataset. Siraj will also review the architecture of their inspiration, the Neural Turing Machine.
-
Lesson 02: Hyperparameters
In this lesson, we'll look at a number of different hyperparameters that are important for our deep learning work. We'll discuss starting values and intuitions for tuning each hyperparameter.
- Concept 01: Introducing Jay
- Concept 02: Introduction
- Concept 03: Learning Rate
- Concept 04: Learning Rate
- Concept 05: Minibatch Size
- Concept 06: Number of Training Iterations / Epochs
- Concept 07: Number of Hidden Units / Layers
- Concept 08: RNN Hyperparameters
- Concept 09: RNN Hyperparameters
- Concept 10: Sources & References
-
Lesson 03: Deep Convolutional GANs
In this lesson, you'll implement a Deep Convolutional GAN to generate complex color images of house numbers.
- Concept 01: Deep Convolutional GANs
- Concept 02: DCGAN Architecture
- Concept 03: Batch Normalization
- Concept 04: DCGAN Implementation
- Concept 05: DCGAN and the Generator
- Concept 06: Generator Solution
- Concept 07: Discriminator
- Concept 08: Discriminator Solution
- Concept 09: Building and Training the Network
- Concept 10: Hyperparameter Solutions
-
Lesson 04: Generate Faces
Compete two neural networks against each other to generate realistic faces.
-
Lesson 05: Semi-Supervised Learning
Ian Goodfellow leads you through a semi-supervised GAN model, a classifier that can learn from mostly unlabeled data.
- Concept 01: Semi-supervised Learning
- Concept 02: Semi-Supervised Classification with GANs
- Concept 03: Introducing Semi-Supervised Learning
- Concept 04: Data Prep
- Concept 05: Building The Generator And Discriminator
- Concept 06: Model Loss Exercise
- Concept 07: Model Optimization Exercise
- Concept 08: Training The Network
- Concept 09: Discriminator Solution
- Concept 10: Model Loss Solution
- Concept 11: Model Optimizer Solution
- Concept 12: Trained Semi-Supervised GAN
-
Part 05 : Guaranteed Admission into your next Nanodegree
Utilize your guaranteed admission and enroll in a Career-Ready Nanodegree program.
-
Module 01: Guaranteed Admission into your next Nanodegree
-
Lesson 01: Enroll in your next Nanodegree program
Utilize your guaranteed admission and enroll in a Career-Ready Nanodegree program.
-
Part 06 (Elective): Introductions
Get introduced to the Nanodegree Foundation program, as well as cover some basics to get you up to speed.
-
Module 01: Lessons
-
Lesson 01: Welcome to Deep Learning
Welcome to the Deep Learning Nanodegree program!
- Concept 01: Welcome to the Deep Learning Nanodegree Program
- Concept 02: Meet Your Instructors
- Concept 03: Learning Plan
- Concept 04: Program Structure
- Concept 05: Projects You Will Build
- Concept 06: Deadline Policy
- Concept 07: Udacity Support
- Concept 08: Community Guidelines
- Concept 09: Prerequisites
- Concept 10: Getting Set Up
-
Lesson 02: Applying Deep Learning
In this lesson, you'll get your hands dirty by playing around with a few examples of deep learning. Don't worry if you don't understand what's going on! The goal here is just for you to play around with some models others have already created and have fun.
-
Lesson 03: Anaconda
Anaconda is a package and environment manager built specifically for data science. Learn how to use Anaconda to improve your data analysis workflow.
-
Lesson 04: Jupyter Notebooks
Learn how to use Jupyter Notebooks to create documents combining code, text, images, and more.
- Concept 01: Instructor
- Concept 02: What are Jupyter notebooks?
- Concept 03: Installing Jupyter Notebook
- Concept 04: Launching the notebook server
- Concept 05: Notebook interface
- Concept 06: Code cells
- Concept 07: Markdown cells
- Concept 08: Keyboard shortcuts
- Concept 09: Magic keywords
- Concept 10: Converting notebooks
- Concept 11: Creating a slideshow
- Concept 12: Finishing up
-
Lesson 05: Matrix Math and NumPy Refresher
In this lesson, you'll review the matrix math you'll need to understand to build your neural networks. You'll also explore NumPy, the library you'll use to efficiently deal with matrices in Python.
- Concept 01: Introduction
- Concept 02: Data Dimensions
- Concept 03: Data in NumPy
- Concept 04: Element-wise Matrix Operations
- Concept 05: Element-wise Operations in NumPy
- Concept 06: Matrix Multiplication: Part 1
- Concept 07: Matrix Multiplication: Part 2
- Concept 08: NumPy Matrix Multiplication
- Concept 09: Matrix Transposes
- Concept 10: Transposes in NumPy
- Concept 11: NumPy Quiz
-
Part 07 (Elective): Neural Networks
Neural networks are the bedrock of deep learning. In this section, you'll learn how they work and test your ability by building a neural network from scratch.
-
Module 01: Lessons
-
Lesson 01: Introduction to Neural Networks
In this lesson, Luis will give you a solid foundation in deep learning and neural networks. You'll also implement gradient descent and backpropagation in Python right here in the classroom.
- Concept 01: Instructor
- Concept 02: Introduction
- Concept 03: Classification Problems 1
- Concept 04: Classification Problems 2
- Concept 05: Linear Boundaries
- Concept 06: Higher Dimensions
- Concept 07: Perceptrons
- Concept 08: Why "Neural Networks"?
- Concept 09: Perceptrons as Logical Operators
- Concept 10: Perceptron Trick
- Concept 11: Perceptron Algorithm
- Concept 12: Non-Linear Regions
- Concept 13: Error Functions
- Concept 14: Log-loss Error Function
- Concept 15: Discrete vs Continuous
- Concept 16: Softmax
- Concept 17: One-Hot Encoding
- Concept 18: Maximum Likelihood
- Concept 19: Maximizing Probabilities
- Concept 20: Cross-Entropy 1
- Concept 21: Cross-Entropy 2
- Concept 22: Multi-Class Cross Entropy
- Concept 23: Logistic Regression
- Concept 24: Gradient Descent
- Concept 25: Logistic Regression Algorithm
- Concept 26: Pre-Lab: Gradient Descent
- Concept 27: Notebook: Gradient Descent
- Concept 28: Perceptron vs Gradient Descent
- Concept 29: Continuous Perceptrons
- Concept 30: Non-linear Data
- Concept 31: Non-Linear Models
- Concept 32: Neural Network Architecture
- Concept 33: Feedforward
- Concept 34: Backpropagation
- Concept 35: Pre-Lab: Analyzing Student Data
- Concept 36: Notebook: Analyzing Student Data
- Concept 37: Outro
-
Lesson 02: Implementing Gradient Descent
Mat will introduce you to a different error function and guide you through implementing gradient descent using NumPy matrix multiplication.
- Concept 01: Mean Squared Error Function
- Concept 02: Gradient Descent
- Concept 03: Gradient Descent: The Math
- Concept 04: Gradient Descent: The Code
- Concept 05: Implementing Gradient Descent
- Concept 06: Multilayer Perceptrons
- Concept 07: Backpropagation
- Concept 08: Implementing Backpropagation
- Concept 09: Further Reading
-
Lesson 03: Training Neural Networks
Now that you know what neural networks are, in this lesson you will learn several techniques to improve their training.
- Concept 01: Instructor
- Concept 02: Training Optimization
- Concept 03: Testing
- Concept 04: Overfitting and Underfitting
- Concept 05: Early Stopping
- Concept 06: Regularization
- Concept 07: Regularization 2
- Concept 08: Dropout
- Concept 09: Local Minima
- Concept 10: Random Restart
- Concept 11: Vanishing Gradient
- Concept 12: Other Activation Functions
- Concept 13: Batch vs Stochastic Gradient Descent
- Concept 14: Learning Rate Decay
- Concept 15: Momentum
- Concept 16: Error Functions Around the World
-
Lesson 04: Sentiment Analysis
In this lesson, Andrew Trask, the author of Grokking Deep Learning, will walk you through using neural networks for sentiment analysis.
- Concept 01: Introducing Andrew Trask
- Concept 02: Meet Andrew
- Concept 03: Materials
- Concept 04: The Notebooks
- Concept 05: Framing the Problem
- Concept 06: Mini Project 1
- Concept 07: Mini Project 1 Solution
- Concept 08: Transforming Text into Numbers
- Concept 09: Mini Project 2
- Concept 10: Mini Project 2 Solution
- Concept 11: Building a Neural Network
- Concept 12: Mini Project 3
- Concept 13: Mini Project 3 Solution
- Concept 14: Understanding Neural Noise
- Concept 15: Mini Project 4
- Concept 16: Understanding Inefficiencies in our Network
- Concept 17: Mini Project 5
- Concept 18: Mini Project 5 Solution
- Concept 19: Further Noise Reduction
- Concept 20: Mini Project 6
- Concept 21: Mini Project 6 Solution
- Concept 22: Analysis: What's Going on in the Weights?
- Concept 23: Conclusion
-
Lesson 05: Keras
In this section you'll get a hands-on introduction to Keras. You'll learn to apply it to analyze movie reviews.
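A minimal Keras sketch of the kind of fully connected classifier this section applies to movie reviews (layer sizes and the 1000-word vocabulary are illustrative):

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=1000))  # bag-of-words input
model.add(Dense(2, activation='softmax'))                # positive vs. negative
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.summary()
# model.fit(x_train, y_train, epochs=10, batch_size=32)  # with your own data
```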
-
Lesson 06: TensorFlow
In this section you'll get a hands-on introduction to TensorFlow, Google's deep learning framework, and you'll be able to apply it on an image dataset.
- Concept 01: Intro
- Concept 02: Installing TensorFlow
- Concept 03: Hello, Tensor World!
- Concept 04: Quiz: TensorFlow Linear Function
- Concept 05: Quiz: TensorFlow Softmax
- Concept 06: Quiz: TensorFlow Cross Entropy
- Concept 07: Quiz: Mini-batch
- Concept 08: Epochs
- Concept 09: Pre-Lab: NotMNIST in TensorFlow
- Concept 10: Lab: NotMNIST in TensorFlow
- Concept 11: Two-layer Neural Network
- Concept 12: Quiz: TensorFlow ReLUs
- Concept 13: Deep Neural Network in TensorFlow
- Concept 14: Save and Restore TensorFlow Models
- Concept 15: Finetuning
- Concept 16: Quiz: TensorFlow Dropout
- Concept 17: Outro
-
Part 08 (Elective): Convolutional Neural Networks
Convolutional neural networks are the standard for solving vision problems. They're used in self-driving cars, face recognition, medical imaging, and a whole lot more! You'll learn how this network works and apply it to an image classification problem.
-
Module 01: Lessons
-
Lesson 01: Cloud Computing
Take advantage of Amazon's GPUs to train your neural network faster. In this lesson, you'll set up an instance on AWS and train a neural network on a GPU.
-
Lesson 03: Weight Initialization
In this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution, allowing it to reach the best solution more quickly.
-
Lesson 04: Convolutional Neural Networks
Alexis explains the theory behind Convolutional Neural Networks and how they help us dramatically improve performance in image classification.
- Concept 01: Introducing Alexis
- Concept 02: Applications of CNNs
- Concept 03: How Computers Interpret Images
- Concept 04: MLPs for Image Classification
- Concept 05: Categorical Cross-Entropy
- Concept 06: Model Validation in Keras
- Concept 07: When do MLPs (not) work well?
- Concept 08: Mini Project: Training an MLP on MNIST
- Concept 09: Local Connectivity
- Concept 10: Convolutional Layers (Part 1)
- Concept 11: Convolutional Layers (Part 2)
- Concept 12: Stride and Padding
- Concept 13: Convolutional Layers in Keras
- Concept 14: Quiz: Dimensionality
- Concept 15: Pooling Layers
- Concept 16: Max Pooling Layers in Keras
- Concept 17: CNNs for Image Classification
- Concept 18: CNNs in Keras: Practical Example
- Concept 19: Mini Project: CNNs in Keras
- Concept 20: Image Augmentation in Keras
- Concept 21: Mini Project: Image Augmentation in Keras
- Concept 22: Groundbreaking CNN Architectures
- Concept 23: Visualizing CNNs (Part 1)
- Concept 24: Visualizing CNNs (Part 2)
- Concept 25: Transfer Learning
- Concept 26: Transfer Learning in Keras
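A minimal sketch of a Keras CNN in the style of this lesson (filter counts and the CIFAR-10-like input shape are illustrative):

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(16, kernel_size=2, padding='same', activation='relu',
                 input_shape=(32, 32, 3)))
model.add(MaxPooling2D(pool_size=2))                 # halve spatial dimensions
model.add(Conv2D(32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))           # 10 object classes
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])
```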
-
Lesson 05: Autoencoders
Autoencoders are neural networks used for data compression, image denoising, and dimensionality reduction. Here, you'll build autoencoders using TensorFlow.
-
Lesson 06: Transfer Learning in TensorFlow
In practice, most people don't train their own networks on massive datasets. In this lesson, you'll learn how to use a pretrained network on a new problem with transfer learning.
-
Lesson 07: Deep Learning for Cancer Detection with Sebastian Thrun
In this lesson, Sebastian Thrun teaches us about his groundbreaking work detecting skin cancer with convolutional neural networks.
- Concept 01: Intro
- Concept 02: Skin Cancer
- Concept 03: Survival Probability of Skin Cancer
- Concept 04: Medical Classification
- Concept 05: The data
- Concept 06: Image Challenges
- Concept 07: Quiz: Data Challenges
- Concept 08: Solution: Data Challenges
- Concept 09: Training the Neural Network
- Concept 10: Quiz: Random vs Pre-initialized Weights
- Concept 11: Solution: Random vs Pre-initialized Weight
- Concept 12: Validating the Training
- Concept 13: Quiz: Sensitivity and Specificity
- Concept 14: Solution: Sensitivity and Specificity
- Concept 15: More on Sensitivity and Specificity
- Concept 16: Quiz: Diagnosing Cancer
- Concept 17: Solution: Diagnosing Cancer
- Concept 18: Refresh on ROC Curves
- Concept 19: Quiz: ROC Curve
- Concept 20: Solution: ROC Curve
- Concept 21: Comparing our Results with Doctors
- Concept 22: Visualization
- Concept 23: What is the network looking at?
- Concept 24: Refresh on Confusion Matrices
- Concept 25: Confusion Matrix
- Concept 26: Conclusion
- Concept 27: Useful Resources
- Concept 28: Mini Project Introduction
- Concept 29: Mini Project: Dermatologist AI
-
-
Module 02: Project
-
Lesson 01: CNN Project: Dog Breed Classifier
In this project, you will learn how to build a pipeline to process real-world, user-supplied images. Given an image of a dog, your algorithm will identify an estimate of the canine’s breed.
-
Part 09 (Elective): Recurrent Neural Networks
Recurrent neural networks are great for making predictions on sequential data like music and text. With this kind of network, you can generate new music, translate a language, or predict a seizure from an electroencephalogram. This section will teach you how to build and train a recurrent neural network.
-
Module 01: Lessons
-
Lesson 01: Recurrent Neural Networks
Ortal will introduce Recurrent Neural Networks (RNNs), which are machine learning models that are able to recognize and act on sequences of inputs.
- Concept 01: Introducing Ortal
- Concept 02: RNN Introduction
- Concept 03: RNN History
- Concept 04: RNN Applications
- Concept 05: Feedforward Neural Network - Reminder
- Concept 06: The Feedforward Process
- Concept 07: Feedforward Quiz
- Concept 08: Backpropagation - Theory
- Concept 09: Backpropagation - Example (part a)
- Concept 10: Backpropagation - Example (part b)
- Concept 11: Backpropagation Quiz
- Concept 12: RNN (part a)
- Concept 13: RNN (part b)
- Concept 14: RNN - Unfolded Model
- Concept 15: Unfolded Model Quiz
- Concept 16: RNN - Example
- Concept 17: Backpropagation Through Time (part a)
- Concept 18: Backpropagation Through Time (part b)
- Concept 19: Backpropagation Through Time (part c)
- Concept 20: BPTT Quiz 1
- Concept 21: BPTT Quiz 2
- Concept 22: BPTT Quiz 3
- Concept 23: Some more math
- Concept 24: RNN Summary
- Concept 25: From RNN to LSTM
- Concept 26: Wrap Up
-
Lesson 02: Long Short-Term Memory Networks (LSTM)
Luis explains Long Short-Term Memory networks (LSTMs) and similar architectures, which have the benefit of preserving long-term memory.
- Concept 01: Intro to LSTM
- Concept 02: RNN vs LSTM
- Concept 03: Basics of LSTM
- Concept 04: Architecture of LSTM
- Concept 05: The Learn Gate
- Concept 06: The Forget Gate
- Concept 07: The Remember Gate
- Concept 08: The Use Gate
- Concept 09: Putting it All Together
- Concept 10: Quiz
- Concept 11: Other architectures
- Concept 12: Outro LSTM
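For reference, one common formulation of the LSTM equations; notation varies across sources, and the lesson's learn/forget/remember/use gates correspond to combinations of these:

```latex
f_t = \sigma(W_f [h_{t-1}, x_t] + b_f)            % forget gate
i_t = \sigma(W_i [h_{t-1}, x_t] + b_i)            % input ("learn") gate
\tilde{C}_t = \tanh(W_C [h_{t-1}, x_t] + b_C)     % candidate cell state
C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t   % "remember": updated cell state
o_t = \sigma(W_o [h_{t-1}, x_t] + b_o)            % output ("use") gate
h_t = o_t \odot \tanh(C_t)                        % new hidden state
```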
-
Lesson 03: Implementation of RNN and LSTM
- Concept 01: Intro
- Concept 02: Character-wise RNNs
- Concept 03: Sequence Batching
- Concept 04: Character-wise RNN Notebook
- Concept 05: Implementing a Character-wise RNN
- Concept 06: Batching Data Solution
- Concept 07: LSTM Cell
- Concept 08: LSTM Cell Solution
- Concept 09: RNN Output
- Concept 10: Network Loss
- Concept 11: Output and Loss Solutions
- Concept 12: Build the Network
- Concept 13: Build the Network Solution
-
Lesson 04: Hyperparameters
In this lesson, we'll look at a number of different hyperparameters that are important for our deep learning work. We'll discuss starting values and intuitions for tuning each hyperparameter.
- Concept 01: Introducing Jay
- Concept 02: Introduction
- Concept 03: Learning Rate
- Concept 04: Learning Rate
- Concept 05: Minibatch Size
- Concept 06: Number of Training Iterations / Epochs
- Concept 07: Number of Hidden Units / Layers
- Concept 08: RNN Hyperparameters
- Concept 09: RNN Hyperparameters
- Concept 10: Sources & References
-
Lesson 05: Embeddings and Word2vec
In this lesson, you'll learn about embeddings in neural networks by implementing the word2vec model.
-
Part 10 (Elective): Generative Adversarial Networks
Generative adversarial networks are a type of unsupervised learning where two neural networks compete against each other. This is commonly used to generate image data. You’ll learn how to build your own generative adversarial network and pit two neural networks against each other.
-
Module 01: Lessons
-
Lesson 01: Generative Adversarial Networks
Ian Goodfellow, the inventor of GANs, introduces you to these exciting models. You'll also implement your own GAN on the MNIST dataset.
- Concept 01: Introducing Ian Goodfellow
- Concept 02: What can you do with GANs?
- Concept 03: How GANs work
- Concept 04: Games and Equilibria
- Concept 05: Practical tips and tricks for training GANs
- Concept 06: Build a GAN
- Concept 07: Get started with a GAN
- Concept 08: Generator Network
- Concept 09: Discriminator Network
- Concept 10: Generator and Discriminator Solutions
- Concept 11: Building the Network
- Concept 12: Building the Network Solution
- Concept 13: Training Losses
- Concept 14: Training Optimizers
- Concept 15: Training Losses and Optimizers Solution
- Concept 16: A Trained GAN
- Concept 17: Doing More With Your GAN
-
Lesson 02: Deep Convolutional GANs
In this lesson, you'll implement a Deep Convolutional GAN to generate complex color images of house numbers.
- Concept 01: Deep Convolutional GANs
- Concept 02: DCGAN Architecture
- Concept 03: Batch Normalization
- Concept 04: DCGAN Implementation
- Concept 05: DCGAN and the Generator
- Concept 06: Generator Solution
- Concept 07: Discriminator
- Concept 08: Discriminator Solution
- Concept 09: Building and Training the Network
- Concept 10: Hyperparameter Solutions
-
Lesson 03: Semi-Supervised Learning
Ian Goodfellow leads you through a semi-supervised GAN model, a classifier that can learn from mostly unlabeled data.
- Concept 01: Semi-supervised Learning
- Concept 02: Semi-Supervised Classification with GANs
- Concept 03: Introducing Semi-Supervised Learning
- Concept 04: Data Prep
- Concept 05: Building The Generator And Discriminator
- Concept 06: Model Loss Exercise
- Concept 07: Model Optimization Exercise
- Concept 08: Training The Network
- Concept 09: Discriminator Solution
- Concept 10: Model Loss Solution
- Concept 11: Model Optimizer Solution
- Concept 12: Trained Semi-Supervised GAN
-
Part 11 (Elective): Deep Reinforcement Learning
Use Reinforcement Learning algorithms like Q-Learning to train artificial agents to take optimal actions in an environment.
-
Module 01: Lessons
-
Lesson 01: Introduction to RL
Reinforcement learning is a type of machine learning where the machine or software agent learns how to maximize its performance at a task.
-
Lesson 02: The RL Framework: The Problem
Learn how to mathematically formulate tasks as Markov Decision Processes.
- Concept 01: Introduction
- Concept 02: The Setting, Revisited
- Concept 03: Episodic vs. Continuing Tasks
- Concept 04: Quiz: Test Your Intuition
- Concept 05: Quiz: Episodic or Continuing?
- Concept 06: The Reward Hypothesis
- Concept 07: Goals and Rewards, Part 1
- Concept 08: Goals and Rewards, Part 2
- Concept 09: Quiz: Goals and Rewards
- Concept 10: Cumulative Reward
- Concept 11: Discounted Return
- Concept 12: Quiz: Pole-Balancing
- Concept 13: MDPs, Part 1
- Concept 14: MDPs, Part 2
- Concept 15: Quiz: One-Step Dynamics, Part 1
- Concept 16: Quiz: One-Step Dynamics, Part 2
- Concept 17: MDPs, Part 3
- Concept 18: Finite MDPs
- Concept 19: Summary
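The central quantity in this lesson is the (discounted) return that the agent seeks to maximize:

```latex
G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots
    = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}, \qquad \gamma \in [0, 1]
```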
-
Lesson 03: The RL Framework: The Solution
In reinforcement learning, agents learn to prioritize different decisions based on the rewards and punishments associated with different outcomes.
- Concept 01: Introduction
- Concept 02: Policies
- Concept 03: Quiz: Interpret the Policy
- Concept 04: Gridworld Example
- Concept 05: State-Value Functions
- Concept 06: Bellman Equations
- Concept 07: Quiz: State-Value Functions
- Concept 08: Optimality
- Concept 09: Action-Value Functions
- Concept 10: Quiz: Action-Value Functions
- Concept 11: Optimal Policies
- Concept 12: Quiz: Optimal Policies
- Concept 13: Summary
-
Lesson 04: Dynamic Programming
The dynamic programming setting is a useful first step towards tackling the reinforcement learning problem.
- Concept 01: Introduction
- Concept 02: OpenAI Gym: FrozenLakeEnv
- Concept 03: Your Workspace
- Concept 04: Another Gridworld Example
- Concept 05: An Iterative Method, Part 1
- Concept 06: An Iterative Method, Part 2
- Concept 07: Quiz: An Iterative Method
- Concept 08: Iterative Policy Evaluation
- Concept 09: Implementation
- Concept 10: Mini Project: DP (Parts 0 and 1)
- Concept 11: Action Values
- Concept 12: Implementation
- Concept 13: Mini Project: DP (Part 2)
- Concept 14: Policy Improvement
- Concept 15: Implementation
- Concept 16: Mini Project: DP (Part 3)
- Concept 17: Policy Iteration
- Concept 18: Implementation
- Concept 19: Mini Project: DP (Part 4)
- Concept 20: Truncated Policy Iteration
- Concept 21: Implementation
- Concept 22: Mini Project: DP (Part 5)
- Concept 23: Value Iteration
- Concept 24: Implementation
- Concept 25: Mini Project: DP (Part 6)
- Concept 26: Check Your Understanding
- Concept 27: Summary
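A compact sketch of value iteration over the one-step dynamics format exposed by OpenAI Gym's FrozenLakeEnv (simplified relative to the mini project's solution):

```python
import numpy as np

def value_iteration(P, gamma=0.9, theta=1e-8):
    """P[s][a] is a list of (prob, next_state, reward, done) tuples,
    the one-step dynamics format of FrozenLakeEnv's env.P."""
    V = np.zeros(len(P))
    while True:
        delta = 0
        for s in range(len(P)):
            # Q-value of each action under the current value estimate.
            q = [sum(p * (r + gamma * V[ns] * (not done))
                     for p, ns, r, done in P[s][a]) for a in P[s]]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            return V
```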
-
Lesson 05: Monte Carlo Methods
Write your own implementation of Monte Carlo control to teach an agent to play Blackjack!
- Concept 01: Introduction
- Concept 02: OpenAI Gym: BlackjackEnv
- Concept 03: MC Prediction: State Values
- Concept 04: Implementation
- Concept 05: Mini Project: MC (Parts 0 and 1)
- Concept 06: MC Prediction: Action Values
- Concept 07: Implementation
- Concept 08: Mini Project: MC (Part 2)
- Concept 09: Generalized Policy Iteration
- Concept 10: MC Control: Incremental Mean
- Concept 11: Quiz: Incremental Mean
- Concept 12: MC Control: Policy Evaluation
- Concept 13: MC Control: Policy Improvement
- Concept 14: Quiz: Epsilon-Greedy Policies
- Concept 15: Exploration vs. Exploitation
- Concept 16: Implementation
- Concept 17: Mini Project: MC (Part 3)
- Concept 18: MC Control: Constant-alpha, Part 1
- Concept 19: MC Control: Constant-alpha, Part 2
- Concept 20: Implementation
- Concept 21: Mini Project: MC (Part 4)
- Concept 22: Summary
-
Lesson 06: Temporal-Difference Methods
Learn how to apply temporal-difference methods such as Sarsa, Q-Learning, and Expected Sarsa to solve both episodic and continuing tasks.
- Concept 01: Introduction
- Concept 02: OpenAI Gym: CliffWalkingEnv
- Concept 03: TD Prediction: TD(0)
- Concept 04: Implementation
- Concept 05: Mini Project: TD (Parts 0 and 1)
- Concept 06: TD Prediction: Action Values
- Concept 07: TD Control: Sarsa(0)
- Concept 08: Implementation
- Concept 09: Mini Project: TD (Part 2)
- Concept 10: TD Control: Sarsamax
- Concept 11: Implementation
- Concept 12: Mini Project: TD (Part 3)
- Concept 13: TD Control: Expected Sarsa
- Concept 14: Implementation
- Concept 15: Mini Project: TD (Part 4)
- Concept 16: Analyzing Performance
- Concept 17: Summary
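The heart of Sarsamax (Q-Learning) is a one-line update; a minimal sketch:

```python
import numpy as np

def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=1.0):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    target = reward + gamma * np.max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])
    return Q
```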
-
Lesson 07: Solve OpenAI Gym's Taxi-v2 Task
With reinforcement learning now in your toolbox, you're ready to explore a mini project using OpenAI Gym!
-
Lesson 08: RL in Continuous Spaces
Review the fundamental concepts of reinforcement learning, and learn how to adapt traditional algorithms to work with continuous spaces.
- Concept 01: Deep Reinforcement Learning
- Concept 02: Resources
- Concept 03: Discrete vs. Continuous Spaces
- Concept 04: Quiz: Space Representations
- Concept 05: Discretization
- Concept 06: Exercise: Discretization
- Concept 07: Tile Coding
- Concept 08: Exercise: Tile Coding
- Concept 09: Coarse Coding
- Concept 10: Function Approximation
- Concept 11: Linear Function Approximation
- Concept 12: Kernel Functions
- Concept 13: Non-Linear Function Approximation
- Concept 14: Summary
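A minimal sketch of grid-based discretization with NumPy, in the spirit of the discretization exercise (bin edges are illustrative):

```python
import numpy as np

# Discretize a continuous 2D state (e.g. position, velocity) into grid indices.
grid = [np.linspace(-1.0, 1.0, 9)[1:-1],   # 8 bins per dimension,
        np.linspace(-5.0, 5.0, 9)[1:-1]]   # i.e. 7 interior split points

def discretize(sample, grid):
    return tuple(int(np.digitize(s, g)) for s, g in zip(sample, grid))

print(discretize((0.25, -3.4), grid))  # -> (5, 1)
```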
-
Lesson 09: Deep Q-Learning
Extend value-based reinforcement learning methods to complex problems using deep neural networks.
- Concept 01: Intro to Deep Q-Learning
- Concept 02: Neural Nets as Value Functions
- Concept 03: Monte Carlo Learning
- Concept 04: Temporal Difference Learning
- Concept 05: Q-Learning
- Concept 06: Deep Q Network
- Concept 07: Experience Replay
- Concept 08: Fixed Q Targets
- Concept 09: Deep Q-Learning Algorithm
- Concept 10: DQN Improvements
- Concept 11: Implementing Deep Q-Learning
- Concept 12: TensorFlow Implementation
- Concept 13: Wrap Up
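A minimal sketch of the experience replay buffer described above (capacity and batch size are illustrative):

```python
import random
from collections import deque, namedtuple

Experience = namedtuple('Experience',
                        ['state', 'action', 'reward', 'next_state', 'done'])

class ReplayBuffer:
    """Fixed-size store of past transitions; sampling uniformly at random
    breaks the correlation between consecutive environment steps."""
    def __init__(self, capacity=100000):
        self.memory = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.memory.append(Experience(state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        return random.sample(self.memory, batch_size)
```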
-
Lesson 10: Policy-Based Methods
Policy-based methods try to directly optimize for the optimal policy. Learn how they work, and why they are important, especially for domains with continuous action spaces.
-
Lesson 11: Actor-Critic Methods
Learn how to combine value-based and policy-based methods, bringing together the best of both worlds, to solve challenging reinforcement learning problems.
-
-
Module 02: Project
-
Lesson 01: Teach a Quadcopter How to Fly
Build a quadcopter flying agent that learns to take off, hover, and land using reinforcement learning.
-